Token counting lets you determine the number of tokens in a message before sending it to Claude, helping you make informed decisions about your prompts and usage. For example, you can proactively manage rate limits and costs or keep prompts within a target length.
The token counting endpoint accepts the same structured list of inputs as creating a message, including support for system prompts, tools, images, and PDFs. The response contains the total number of input tokens.
The token count should be considered an estimate. In some cases, the actual number of input tokens used when creating a message may differ by a small amount.
Server tool token counts only apply to the first sampling call.
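For example, a minimal sketch with the Python SDK (the system prompt and message text are illustrative) counts the tokens for a system prompt plus a short conversation:

import anthropic

client = anthropic.Anthropic()

# Count tokens for a system prompt and a single user message.
response = client.messages.count_tokens(
    model="claude-opus-4-20250514",
    system="You are a scientist",
    messages=[
        {"role": "user", "content": "Hello, Claude"},
    ],
)

# The response contains the total number of input tokens.
print(response.input_tokens)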
To count tokens for a request that includes tool definitions:

import anthropic

client = anthropic.Anthropic()

response = client.messages.count_tokens(
    model="claude-opus-4-20250514",
    tools=[
        {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    }
                },
                "required": ["location"],
            },
        }
    ],
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
)

print(response.json())
See here for more details about how the context window is calculated with extended thinking. Thinking blocks from previous assistant turns are ignored and do not count toward your input tokens, while thinking in the current assistant turn does count toward your input tokens. For example, to count tokens for a conversation that includes a previous assistant turn with thinking blocks:
curl https://api.anthropic.com/v1/messages/count_tokens \
    --header "x-api-key: $ANTHROPIC_API_KEY" \
    --header "content-type: application/json" \
    --header "anthropic-version: 2023-06-01" \
    --data '{
      "model": "claude-opus-4-20250514",
      "thinking": {
        "type": "enabled",
        "budget_tokens": 16000
      },
      "messages": [
        {
          "role": "user",
          "content": "Are there an infinite number of prime numbers such that n mod 4 == 3?"
        },
        {
          "role": "assistant",
          "content": [
            {
              "type": "thinking",
              "thinking": "This is a nice number theory question. Lets think about it step by step...",
              "signature": "EuYBCkQYAiJAgCs1le6/Pol5Z4/JMomVOouGrWdhYNsH3ukzUECbB6iWrSQtsQuRHJID6lWV..."
            },
            {
              "type": "text",
              "text": "Yes, there are infinitely many prime numbers p such that p mod 4 = 3..."
            }
          ]
        },
        {
          "role": "user",
          "content": "Can you write a formal proof?"
        }
      ]
    }'
Token counting is free to use but subject to requests per minute rate limits based on your usage tier. If you need higher limits, contact sales through the Anthropic Console.
Usage tier | Requests per minute (RPM)
1          | 100
2          | 2,000
3          | 4,000
4          | 8,000
Token counting and message creation have separate and independent rate limits — usage of one does not count against the limits of the other.
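If you count tokens at high volume and hit the counting endpoint's RPM limit, the usual remedy is client-side retries with backoff. The sketch below is illustrative: the retry counts and delays are arbitrary, and anthropic.RateLimitError is the SDK's error type for HTTP 429 responses.

import time
import anthropic

# The SDK also retries some errors on its own; max_retries just makes that explicit.
client = anthropic.Anthropic(max_retries=2)

def count_with_backoff(messages, attempts=3, delay=1.0):
    for attempt in range(attempts):
        try:
            return client.messages.count_tokens(
                model="claude-opus-4-20250514",
                messages=messages,
            ).input_tokens
        except anthropic.RateLimitError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * (2 ** attempt))  # simple exponential backoff

print(count_with_backoff([{"role": "user", "content": "Hello, Claude"}]))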
Token counting does not use prompt caching logic; it provides an estimate. While you may include cache_control blocks in your token counting request, prompt caching only occurs during actual message creation.
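For example, a request like the following sketch is accepted by the counting endpoint (the system prompt text is a stand-in), but the cache_control block only takes effect when the same content is sent to the Messages API:

import anthropic

client = anthropic.Anthropic()

response = client.messages.count_tokens(
    model="claude-opus-4-20250514",
    system=[
        {
            "type": "text",
            "text": "You are an assistant that answers questions about a long reference document...",
            # Accepted here, but caching only happens during message creation.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize the document."}],
)

print(response.input_tokens)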